
Hearing Research

Elsevier BV

Preprints posted in the last 90 days, ranked by how well they match Hearing Research's content profile, based on 49 papers previously published here. The average preprint has a 0.02% match score for this journal, so anything above that is already an above-average fit.

1
Hearing sounds when the eyes move: A case study implicating the tensor tympani in eye movement-related peripheral auditory activity

King, C. D.; Zhu, T.; Groh, J. M.

2026-03-25 neuroscience 10.64898/2026.03.24.713974 medRxiv
Top 0.1%
22.2%

Information about eye movements is necessary for linking auditory and visual information across space. Recent work has suggested that such signals are incorporated into processing at the level of the ear itself (Gruters, Murphy et al. 2018). Here we report confirmation that the eye movement signals that reach the ear can produce perceptual consequences, via a case report of an unusual participant with tensor tympani myoclonus who hears sounds when she moves her eyes. The sounds she hears could be recorded with a microphone in the ear in which she hears them (left), and occurred for large leftward eye movements to extreme orbital positions. The sounds elicited by this participant's eye movements were reminiscent of eye movement-related eardrum oscillations (EMREOs; Gruters, Murphy et al. 2018, Brohl and Kayser 2023, King, Lovich et al. 2023, Lovich, King et al. 2023, Lovich, King et al. 2023, Abbasi, King et al. 2025, Sotero Silva, Kayser et al. 2025, King and Groh 2026, Leon, Ramos et al. 2026, Sotero Silva, Brohl et al. 2026), but were larger and longer lasting than classical EMREOs, helping to explain why they were audible to her. Overall, the observations from this patient help establish that (a) eye movement-related signals specifically reach the tensor tympani muscle and that (b) when there is an abnormality involving that muscle, such signals can lead to actual audible percepts. Given that the tensor tympani contributes to the regulation of sound transmission in the middle ear, these findings indicate that eye movement signals reaching the ear have functional consequences for auditory perception. The findings also expand the types of medical conditions that produce gaze-evoked tinnitus, to date most commonly observed in connection with acoustic neuromas.

2
Impaired development of the medial olivocochlear system in a KCNQ4-deficient mouse model.

Rias, E.; Ouwerkerk, I.; Spitzmaul, G.; Dionisio, L.

2026-01-23 neuroscience 10.64898/2026.01.21.700803 medRxiv
Top 0.1%
19.2%

The medial olivocochlear (MOC) efferent system modulates outer hair cell (OHC) excitability and protects the cochlea from overstimulation. Cholinergic activation of α9α10 nicotinic acetylcholine receptors (nAChRs) triggers Ca²⁺ influx, activating BK and SK2 Ca²⁺-dependent K⁺ channels, with K⁺ extrusion through KCNQ4 restoring the membrane potential. Loss of KCNQ4 causes chronic depolarization, OHC dysfunction, and hearing loss. Here, we investigated how KCNQ4 deficiency affects cochlear efferent synapse development and organization. Using confocal immunofluorescence, we analyzed efferent innervation in the organ of Corti of Kcnq4-/- (KO) and Kcnq4+/+ (WT) mice at 2, 3, 4, and 10 postnatal weeks (W). At 2 W, efferent terminals were similarly distributed between basal and lateral OHC membrane domains in both genotypes. During maturation, WT mice exhibited complete relocation of MOC terminals to the basal domain, whereas KO mice showed delayed maturation, with some terminals remaining laterally displaced up to 10 W. KCNQ4 absence was associated with a reduced number and volume of efferent boutons on OHCs. Milder morphometric alterations were observed in efferent boutons within the inner hair cell region. At the molecular level, qPCR revealed downregulation of α10 nAChR subunit, BK, and SK2 transcripts in KO at 4 W, with recovery by 10 W. Despite this recovery, BK protein showed reduced expression, mislocalization, and disorganized synaptic plaques in OHCs. KO mice also displayed age-dependent upregulation of the calcium-binding proteins calbindin and calretinin, suggesting compensatory responses to altered Ca²⁺ homeostasis. Together, these findings demonstrate that KCNQ4 is essential for OHC repolarization and for the maturation and maintenance of cochlear efferent synapses.

3
Can Multimodal Large Language Models Visually Interpret Auditory Brainstem Responses?

Jedrzejczak, W.; Kochanek, K.; Skarzynski, H.

2026-04-17 otolaryngology 10.64898/2026.04.15.26350944 medRxiv
Top 0.1%
18.7%

Introduction: Auditory brainstem response (ABR) testing is a standard objective method for estimating hearing threshold, especially in patients who cannot reliably participate in behavioral audiometry. However, ABR interpretation is usually performed by an expert. This study evaluated whether two general-purpose artificial intelligence (AI) multimodal large language model (LLM) chatbots, ChatGPT and Qwen, can accurately estimate ABR hearing thresholds from ABR waveform images. Accuracy was measured by comparison with the judgments of three expert audiologists. Methods: A total of 500 images, each containing several ABR waveforms recorded at different stimulus intensities, were analyzed. Three expert audiologists established the reference auditory thresholds based on visual identification of wave V at the lowest stimulus intensity, with the most frequent judgment among the three used as the reference. Each waveform image was independently submitted to ChatGPT (version 5.1) and Qwen (version 3Max) using the same standardized prompt and without additional clinical context. Agreement with the expert thresholds was assessed as mean errors and correlations. Sensitivity and specificity for detecting hearing loss (>20 dB nHL) were also calculated. In cases where the AI and expert thresholds nominally matched, corresponding latency measures were also compared. Results: Auditory thresholds derived from both LLMs correlated strongly with expert opinion, with Pearson r = 0.954 for ChatGPT and r = 0.958 for Qwen. ChatGPT showed a mean error of +5.5 dB and Qwen showed a mean error of -2.7 dB. Exact nominal agreement with expert values was achieved in 34.6% of ChatGPT estimates and 35.6% of Qwen estimates; agreement within +/-10 dB was observed in 75.6% and 80.0% of cases, respectively. For hearing-loss classification, ChatGPT achieved 100% sensitivity but low specificity (20.4%), whereas Qwen showed a more balanced profile with 91.6% sensitivity and 67.5% specificity. Curiously, estimates of wave V latency were markedly poor for both LLMs, with systematic underestimation and weak correlations with the expert judgments. Conclusion: ChatGPT and Qwen demonstrated a moderate ability to estimate ABR thresholds from waveform images, although their performance was not good enough for independent clinical use. Both models captured general patterns of hearing loss severity, but there was systematic bias, limited specificity and sensitivity balance, and poor latency estimation. General-purpose multimodal LLMs may have potential as assistive or preliminary tools, but clinically reliable ABR interpretation will likely require specialized, domain-trained AI systems with expert oversight.
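For readers who want to reproduce the kind of agreement statistics reported above (Pearson correlation, mean signed error, agreement within +/-10 dB, and sensitivity/specificity at the >20 dB nHL cutoff), a minimal Python sketch is given below. The threshold arrays are hypothetical placeholders, not the study data.

# Sketch of the agreement statistics reported above; threshold arrays are
# placeholders, not the study data.
import numpy as np
from scipy.stats import pearsonr

expert = np.array([10, 20, 40, 60, 30, 15, 50, 25])   # dB nHL, hypothetical
model  = np.array([15, 25, 45, 55, 40, 20, 50, 35])   # dB nHL, hypothetical

r, _ = pearsonr(model, expert)
mean_error = np.mean(model - expert)                   # signed bias in dB
within_10 = np.mean(np.abs(model - expert) <= 10)      # proportion within +/-10 dB

# Hearing-loss classification at the >20 dB nHL cutoff used in the abstract
truth = expert > 20
pred = model > 20
sensitivity = np.sum(pred & truth) / np.sum(truth)
specificity = np.sum(~pred & ~truth) / np.sum(~truth)

print(f"r={r:.3f}, bias={mean_error:+.1f} dB, within 10 dB={within_10:.0%}")
print(f"sensitivity={sensitivity:.0%}, specificity={specificity:.0%}")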

4
Peripheral phoneme encoding and discrimination in aging and hearing impairment

Wouters, M.; Gaudrain, E.; Dapper, K.; Schirmer, J.; Baskent, D.; Ruettiger, L.; Knipper, M.; Verhulst, S.

2026-01-28 neuroscience 10.64898/2026.01.27.702044 medRxiv
Top 0.1%
18.6%

Speech perception difficulties in noise are common among older adults and individuals with hearing impairment, even when audiometric thresholds appear normal. We examined how aging, cochlear synaptopathy (CS), and outer hair cell (OHC) damage affect speech encoding and phoneme discrimination. Envelope-following responses (EFRs) to rectangular amplitude-modulated (RAM) tones and speech-like phoneme pairs were recorded in quiet using EEG, and behavioral discrimination was assessed in quiet, ipsilateral, and contralateral noise. Stimuli were designed to target temporal envelope (TENV) or temporal fine structure (TFS) encoding. Results showed that RAM-EFR amplitudes decreased gradually with age, consistent with emerging CS, while magnitudes of high-frequency TENV-based EFRs in quiet were most reduced in older hearing-impaired listeners with combined CS and OHC damage. In contrast, EFRs targeting low-frequency TENV encoding in quiet remained preserved. Behaviorally, phoneme discrimination of TFS contrasts worsened with OHC loss and age in quiet and contralateral noise, respectively, while there was no significant effect of age on the discrimination of TENV contrasts. Considering that high-frequency contrasts are discriminated via place-based spectral cues, low-frequency contrasts rely on TFS, and the EFR reflects primarily TENV, this framework explains why EFRs decline for high-frequency cues without perceptual loss, while EFRs remain stable for low-frequency cues even as TFS-based discrimination deteriorates. These findings highlight the need for further investigation into how neural coding deficits relate to perceptual outcomes. Combining electrophysiological and behavioral measures might provide a sensitive framework for detecting subclinical auditory deficits and diagnosing age-related and hidden hearing loss earlier.

Highlights:
- Speech-evoked EEG shows OHC-loss-related decline of high-CF envelope encoding.
- Speech-evoked EEG shows low-CF envelope encoding stays intact with age.
- Fine-structure contrast discrimination worsens with OHC loss in quiet.
- Fine-structure contrast discrimination worsens with age in contralateral noise.
- High-frequency place-based spectral cue discrimination remains robust with age.
- Peripheral coding strength is not directly reflected at the behavioral level.
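As a rough illustration of how an envelope-following response can be quantified from EEG, the sketch below reads out the spectral magnitude at the stimulus modulation frequency and compares it with neighbouring noise bins. The sampling rate, modulation rate, and synthetic signal are assumptions for illustration, not the study's recording parameters.

# Minimal sketch: EFR amplitude as the EEG spectral magnitude at the stimulus
# modulation frequency. All parameters and the signal are placeholders.
import numpy as np

fs = 4096            # Hz, hypothetical EEG sampling rate
f_mod = 120.0        # Hz, hypothetical modulation rate of the RAM tone
t = np.arange(0, 1.0, 1 / fs)

# Toy "response": small component at f_mod buried in noise
eeg = 0.1 * np.sin(2 * np.pi * f_mod * t) + np.random.randn(t.size)

spectrum = np.fft.rfft(eeg) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
bin_mod = np.argmin(np.abs(freqs - f_mod))

efr_amplitude = 2 * np.abs(spectrum[bin_mod])                            # peak amplitude at f_mod
noise_floor = 2 * np.mean(np.abs(spectrum[bin_mod + 2: bin_mod + 12]))   # nearby bins
print(f"EFR amplitude: {efr_amplitude:.3f}, noise floor: {noise_floor:.3f}")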

5
EEG correlates of auditory rise time processing: A systematic review

Manasevich, V.; Kostanian, D.; Rogachev, A.; Sysoeva, O.

2026-03-09 neuroscience 10.64898/2026.03.06.710012 medRxiv
Top 0.1%
15.1%

Rise time (RT) is considered to be one of the most significant acoustical characteristics of auditory speech stimuli. A substantial amount of data has been accumulated on the neurophysiological mechanisms of RT processing under different conditions and in different groups of people, but these data have not been systematised. This review focuses on studies that have investigated electroencephalographic (EEG) markers of RT sensitivity. The literature search was conducted according to the PRISMA statement in the PubMed, Web of Science and APA PsycInfo databases. The resulting review comprised 37 studies that considered diverse aspects of RT processing. The review describes the main stimulation parameters affecting electrophysiological markers of RT processing, reflected in different components of event-related potentials (ERPs), brainstem responses and cortical rhythmic activity. The main finding of this review is that rise time prolongation leads to a decrease in the amplitude of the main ERP components and an increase in their latencies. However, the sensitivity of the EEG markers varied: the earliest components tracked subtle differences (a few tens of microseconds), while later components coded larger ones (up to 500 ms). Nevertheless, the observed effects may vary and depend on aspects of the experimental paradigm, the age of participants and speech-related problems. Future research may benefit from addressing understudied clinical groups and ERP components such as P1 and N2, which dominate in children.

6
Immune response to spiral ganglion neuron death in rats during development and after kanamycin-induced deafening

Caro, A. M.; Zhang, Z.; Gansemer, B. M.; Green, S. H.

2026-03-13 neuroscience 10.64898/2026.03.10.710901 medRxiv
Top 0.1%
14.5%

Spiral ganglion neurons (SGNs) constitute the sole afferent connection between cochlear hair cells and central auditory nuclei. SGNs die during postnatal developmental pruning, and also following hair cell death, which can be triggered by ototoxic agents such as aminoglycoside antibiotics, including kanamycin. After hair cell loss, animal models show extensive SGN degeneration occurring gradually over a period of weeks to months. Here, we compared spatial and temporal patterns of SGN loss and immune cell involvement in these two cases of cell death in rats. Developmental SGN pruning occurred from postnatal day 5 (P5) to P8 in the basal half of the cochlea, and from P5 to P12 in the apical half. This was accompanied by a transient increase in spiral ganglion macrophages temporally and spatially correlated with SGN death, consistent with a role in clearing degenerating neurons. After deafening neonatal rats with kanamycin injections, SGN death became evident at approximately 5.5 weeks of age and persisted throughout the ganglion, with the greatest loss in the middle regions and less in the base and apex. Macrophage numbers also increased but were neither temporally nor spatially correlated with SGN death. Rather, the increase in macrophage number and activation began approximately three weeks before SGN death and was highest in the apex. Additionally, T-cells and NK cells appeared in the ganglion concurrently with SGN degeneration. These observations suggest fundamentally different roles for macrophages post-deafening than during developmental pruning and, together with prior observations that anti-inflammatory drugs reduce SGN death, support a causal role for immune responses in SGN death post-deafening.

7
Trial-By-Trial Auditory Brainstem Response Detection

Liu, G. S.; Ali, N.-E.-S.; O Maoileidigh, D.

2026-02-03 physiology 10.64898/2026.01.31.703019 medRxiv
Top 0.1%
14.4%

The neural response of the brainstem to brief sounds, known as the auditory brainstem response (ABR), is widely employed in the laboratory and the clinic to diagnose hearing loss. In contrast to behavioral methods that assess hearing using responses to sounds on a trial-by-trial basis, current ABR approaches are limited to analyzing the average ABR over hundreds of trials. Historically, trial-by-trial ABR analysis has not been possible owing to each trial's small signal-to-noise ratio. Here we overcome this limitation and show how to classify individual ABR trials as detected or undetected. We use the distribution of single-trial ABRs to assess supra-threshold hearing and to define psychophysics-like thresholds, which we call auditory brainstem detection (ABD) thresholds. ABD thresholds decrease as more of the ABR epoch is taken into account, whereas traditional ABR thresholds do not change. Above the ABD thresholds and below 90 dB SPL, signal detection is significantly improved by utilizing more of the ABR epoch. Our method also allows us to rank the supra-threshold hearing ability of individual subjects. Despite having normal ABR thresholds, some subjects appear to have supra-threshold hearing deficits. The trial-by-trial method demonstrates that signal detection by the ensemble of auditory neurons in the brainstem is intrinsically stochastic not only at low stimulus levels, but also at levels up to 100 dB SPL. Significance Statement: Neural responses to sound can be measured by electrodes placed on a subject's head and are commonly used in the laboratory and the clinic to assess hearing. Although the auditory system must distinguish each sound stimulus from intrinsic noise, current methods for analyzing the response of the brainstem to sound only utilize the average response to hundreds of stimuli. Here we overcome this constraint by showing how to classify an individual sound stimulus as detected or undetected based on each auditory brainstem response. This approach can assess hearing at all stimulus levels, indicates that subjects with normal hearing thresholds can exhibit supra-threshold hearing loss, and potentially extends the types of hearing deficits that can be diagnosed using auditory evoked potentials.
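The abstract does not spell out the classifier, so the sketch below shows only one generic way to label single epochs as detected or undetected: score each epoch against a template and compare the score with a noise-only null distribution. It is an illustration of single-trial detection in principle, not the authors' method, and all arrays are synthetic.

# Illustrative sketch of single-trial detection against a noise-only null
# distribution; not necessarily the method used in the preprint.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 500, 240
template = np.sin(np.linspace(0, 3 * np.pi, n_samples)) * np.hanning(n_samples)

trials = 0.05 * template + rng.normal(0, 1.0, (n_trials, n_samples))   # signal epochs
noise_epochs = rng.normal(0, 1.0, (n_trials, n_samples))               # noise-only epochs

def score(epochs, template):
    # Matched-filter score of each epoch against the template
    return epochs @ template / np.linalg.norm(template)

null_scores = score(noise_epochs, template)
criterion = np.percentile(null_scores, 95)          # 5% false-detection rate
detected = score(trials, template) > criterion

print(f"detected on {detected.mean():.0%} of trials")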

8
Population decoding of sound source location by receptive field neurons in the mouse superior colliculus

Mullen, B. R.; Litke, A. M.; Feldheim, D. A.

2026-01-27 neuroscience 10.64898/2026.01.26.701861 medRxiv
Top 0.1%
14.0%

Identifying the location of a sound source in a complex environment and assessing its importance can be crucial for survival. The superior colliculus (SC), a midbrain structure involved in sensorimotor functions, contributes to sound localization and contains auditory responsive neurons that have spatially restricted receptive fields (RFs) organized into a topographic map along the azimuth. However, individual auditory SC neurons have large spatial RFs, are noisy, and do not respond to the same stimulus on every trial. Therefore, when an animal is presented with a single-trial sound and must rely on a single neuron to locate the sound source direction, the location estimate may be erroneous, missing, or have poor spatial resolution. A more reliable and accurate determination of the sound source location is expected to come from a population of neurons. We therefore built a population-pattern Maximum Likelihood Estimation (MLE) decoder to predict the location of a stimulus from the population response. We compared three models that weight neurons by strict firing rate (FR), equally (EW), or by mutual information (MIW), and show that the MIW model works best, needing only 92 neurons to localize a stimulus with behaviorally relevant precision. Furthermore, by comparing models fitted using the responses from non-RF and RF auditory neurons, we show that only RF neurons contain the information needed to localize a sound source. These results are consistent with the hypothesis that the SC uses a population of RF neurons to determine sound source location. Author Summary: Being able to tell where a sound is coming from and how important it is can be critical for survival. The superior colliculus, a midbrain region involved in orienting behaviors, contains neurons that respond best to sounds coming from specific locations. This suggests that the combined activity of many neurons in the SC is used to determine sound location from a single sound event. To test this idea, we modeled responses from mouse SC neurons while sounds were played from different positions in space, both along the elevation and the horizon. A model that weighted the most informative neurons performed best in both directions, needing only 92 neurons to localize a stimulus with behaviorally relevant precision along the azimuth. Comparing models fitted using the responses from non-RF and RF auditory neurons, we show that only RF neurons contain the information needed to localize a sound source. Overall, our findings show that the SC can accurately locate sounds in both horizontal and vertical space using a population-based strategy, providing a simple and effective solution for rapid sound localization.
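The sketch below illustrates a weighted Poisson maximum-likelihood population decoder in the spirit of the FR/EW/MIW comparison described above. Tuning curves, weights, and spike counts are synthetic placeholders rather than the recorded SC data.

# Minimal sketch of a weighted Poisson maximum-likelihood population decoder;
# tuning curves, weights, and counts are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_locations = 92, 12

# Hypothetical tuning curves: mean spike count of each neuron at each azimuth
preferred = rng.integers(0, n_locations, n_neurons)
locs = np.arange(n_locations)
tuning = 1.0 + 4.0 * np.exp(-0.5 * ((locs[None, :] - preferred[:, None]) / 1.5) ** 2)

weights = rng.uniform(0.5, 1.5, n_neurons)      # stand-in for mutual-information weights

def decode(counts, tuning, weights):
    # Location maximizing the weighted Poisson log-likelihood (log(k!) dropped)
    loglik = counts[:, None] * np.log(tuning) - tuning
    return np.argmax(weights @ loglik)

true_loc = 7
counts = rng.poisson(tuning[:, true_loc])       # single-trial response to that location
print("decoded location:", decode(counts, tuning, weights))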

9
Saccade-related sound pulses and phase-resetting contribute to eye movement-related eardrum oscillations (EMREOs)

King, C. D.; Groh, J. M.

2026-03-27 neuroscience 10.64898/2026.03.25.714060 medRxiv
Top 0.1%
10.2%

Eye movement-related eardrum oscillations (EMREOs) appear to consist of a pulse of oscillation occurring in conjunction with saccades. However, this apparent pulse could occur either because there is an increase in energy at that frequency at the time of saccades (a true pulse), or because there is saccade-related phase resetting of ongoing energy in that frequency band, which appears pulse-like when averaged in the time domain across many trials. Here we conducted a spectral analysis at the individual trial level in humans performing a visually guided saccade task to determine whether the power at the EMREO frequency (30-45 Hz) is higher during saccades than during steady fixation. We found both an increase in sound power in the EMREO frequency band associated with saccades, i.e. sound pulses at the individual trial level, as well as phase resetting at saccade onset/offset. While both factors contribute to the apparently pulse-like EMREO signal, phase resetting appears to be more prevalent across participants. The prevalence of phase resetting has implications for the underlying mechanism(s) producing EMREOs as well as functional consequences for how the ear might respond to incoming sound in an eye-position dependent fashion.
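The two signatures contrasted in this abstract, a genuine single-trial power increase versus phase resetting of ongoing activity, can be separated with band-limited power and inter-trial coherence, roughly as in the sketch below. Sampling rate, filter band, and data are hypothetical, not the study's recordings.

# Sketch: band power around "saccade onset" vs. inter-trial phase coherence.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 2000                                    # Hz, hypothetical microphone sampling rate
rng = np.random.default_rng(2)
trials = rng.normal(0, 1.0, (200, 2 * fs))   # 200 trials, saccade onset at 1 s

b, a = butter(4, [30, 45], btype="bandpass", fs=fs)
band = filtfilt(b, a, trials, axis=1)
analytic = hilbert(band, axis=1)

power = np.abs(analytic) ** 2
baseline = power[:, : fs // 2].mean()              # pre-saccade window
peri = power[:, fs : fs + fs // 4].mean()          # first 250 ms after onset
power_ratio = peri / baseline                      # >1 would indicate a true pulse

phase = np.angle(analytic)
itc = np.abs(np.mean(np.exp(1j * phase), axis=0))  # inter-trial coherence per sample
print(f"power ratio: {power_ratio:.2f}, max ITC near onset: {itc[fs:fs + fs//4].max():.2f}")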

10
Improving Automated Diagnosis of Middle and Inner Ear Pathologies by Estimating Middle Ear Input Impedance from Wideband Tympanometry

Kamau, A. F.; Merchant, G. R.; Nakajima, H. H.; Neely, S. T.

2026-03-31 otolaryngology 10.64898/2026.03.26.26349034 medRxiv
Top 0.1%
8.5%

Conductive hearing loss (CHL) with a normal otoscopic exam can be difficult to diagnose because routine clinical measures such as audiometric air-bone gaps (ABGs) can identify a conductive component but often cannot distinguish among specific underlying mechanical pathologies (e.g., stapes fixation versus superior canal dehiscence, which may produce similar audiograms). Wideband tympanometry (WBT) is a fast, noninvasive test that can provide additional mechanical information across a broad range of frequencies (200 Hz to 8 kHz). However, WBT metrics are influenced by variations in ear canal geometry and probe placement and can be challenging to interpret clinically. In this study, we extend prior WBT absorbance-based classification work by estimating the middle ear input impedance at the tympanic membrane (ZME), a WBT-derived metric intended to reduce ear canal effects. To estimate ZME, we fit an analog circuit model of the ear canal, middle ear, and inner ear to raw WBT data collected at tympanometric peak pressure (TPP). Data from 27 normal ears, 32 ears with superior canal dehiscence, and 38 ears with stapes fixation were analyzed. A multinomial logistic regression classifier was trained using principal component analysis (retaining 90% variance) and stratified 5-fold cross-validation with regularization. We compared feature sets based on ABGs alone, ABGs combined with absorbance, and ABGs combined with the magnitude of ZME. The combination of ABGs and the magnitude of ZME produced the best performance, achieving an overall accuracy of 85.6% compared to 80.4% for ABGs alone and 78.4% for ABGs combined with absorbance. These results suggest that incorporating model-derived middle ear impedance features with standard audiometric measures (ABGs) can improve automated pathology classification for stapes fixation and superior canal dehiscence.
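A minimal sketch of the classification pipeline described above (PCA retaining 90% of the variance, a regularized multinomial logistic regression, stratified 5-fold cross-validation) might look like the following; the feature matrix and labels are placeholders, not the WBT/ABG dataset.

# Sketch of PCA (90% variance) + multinomial logistic regression with
# stratified 5-fold cross-validation; X and y are synthetic placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(97, 40))                 # e.g. ABGs + |ZME| features, hypothetical
y = rng.integers(0, 3, size=97)               # 0=normal, 1=SCD, 2=stapes fixation

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.90),                   # keep components explaining 90% variance
    LogisticRegression(C=1.0, max_iter=2000), # L2-regularized multinomial model
)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"cross-validated accuracy: {scores.mean():.1%} +/- {scores.std():.1%}")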

11
Impacts of heminode disruption on auditory processing of noisy sound stimuli

Tripathy, S.; Budak, M.; Maddox, R.; Mehta, A. H.; Roberts, M. T.; Corfas, G.; Booth, V.; Zochowski, M.

2026-02-04 neuroscience 10.64898/2026.02.02.703242 medRxiv
Top 0.1%
8.3%

Hidden hearing loss (HHL) is an auditory neuropathy characterized by altered auditory nerve responses despite normal hearing thresholds. Recent experimental and computational studies suggest that permanent disruptions to heminode positions in spiral ganglion neuron (SGN) fibers can contribute to these deficits. However, the interaction between heminode disruption and noisy backgrounds ubiquitous in daily listening remains unexplored. This study investigates how background noise affects auditory processing with these peripheral disorders and how deficits propagate to downstream sound localization circuits in the superior olivary complex. We developed computational models of SGN fibers with mild and severe degrees of heminode disruption, subjected to sinusoidal tone stimuli in the presence of background noise with varying spectral characteristics. We analyzed the phase-locking of SGN fiber responses to the stimulus tone and modeled the subsequent effects on interaural time difference (ITD) sensitivity in the medial superior olive (MSO) using a binaural localization network. We found that near-tone-frequency noise disrupted SGN phase locking through cycle-to-cycle variability in spike phases, with effects consistent across tone frequencies. Mild heminode disruption produced frequency-dependent degradation in SGN phase locking, with effects observed only at higher frequencies tested (600-1000 Hz), without reducing overall firing rates. Critically, the effects of noise and heminode disruption were additive, with combined exposure leading to reduced ITD sensitivity and large temporal fluctuations in MSO responses. Severe heminode disruption, which additionally reduced firing rates at the SGN fibers and subsequent stages, produced profound localization deficits across all frequencies tested. Thus, our model results suggest that noisy environments exacerbate auditory deficits from peripheral disorders implicated in HHL and could potentially impair speech intelligibility through degradation in localization ability. This model may be useful for understanding the downstream impacts of SGN neuropathies.
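Phase locking of spike trains to a tone is conventionally quantified with vector strength; the sketch below shows that computation on synthetic spike times, as one plausible reading of the phase-locking analysis mentioned above rather than the study's actual code.

# Sketch: vector strength of spike times relative to a tone.
import numpy as np

rng = np.random.default_rng(4)
tone_freq = 600.0                                     # Hz, within the range tested
# Toy spike train: spikes clustered near a preferred phase plus jitter
n_spikes = 400
cycles = rng.integers(0, 600, n_spikes)
spike_times = (cycles + 0.25 + 0.05 * rng.standard_normal(n_spikes)) / tone_freq

phases = 2 * np.pi * np.mod(spike_times * tone_freq, 1.0)
vector_strength = np.abs(np.mean(np.exp(1j * phases)))   # 0 = no locking, 1 = perfect
print(f"vector strength: {vector_strength:.2f}")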

12
A meta-analysis of bone conduction 80 Hz auditory steady state response thresholds for adults and infants with normal hearing

Perugia, E.; Georga, C.

2026-02-14 otolaryngology 10.64898/2026.02.12.26346168 medRxiv
Top 0.1%
7.2%

Background: Auditory steady-state responses (ASSRs) provide an objective method for estimating hearing thresholds in individuals unable to provide behavioural responses. Bone conduction (BC) testing is required to differentiate conductive from sensorineural hearing loss. Accurate BC ASSR threshold estimation relies on "correction" factors, which are not yet well established. This meta-analysis evaluated the reliability of BC ASSR thresholds for estimating hearing thresholds at 500, 1000, 2000 and 4000 Hz. Methods: A systematic search of PubMed, the Cochrane Library, and Embase was conducted to identify studies involving normal-hearing (NH) and hearing-impaired (HI) participants of all ages. Outcomes were (1) the difference between behavioural and ASSR thresholds, and (2) ASSR thresholds. The risk of bias was evaluated using the Newcastle-Ottawa Scale. The mean and 95% confidence intervals (CI) were calculated for the thresholds at the four frequencies. The certainty of the evidence was assessed using the GRADE approach. Results: Of the records identified, 11 met the inclusion criteria, yielding a total of 27 studies. Sample sizes ranged from 60 to 249 participants across frequencies and age groups. The quality of records ranged from low to high. Data were synthesised using random-effects models due to heterogeneity. In NH adults, the mean differences (±95% CI) between BC ASSR thresholds and behavioural thresholds were 17.0 (±4.8), 15.5 (±6.0), 13.4 (±3.3), and 12.1 (±4.1) dB at 500, 1000, 2000, and 4000 Hz, respectively. In NH infants, mean (±95% CI) BC ASSR thresholds were 17.2 (±2.2), 10.5 (±3.6), 26.4 (±2.7), and 19.9 (±4.0) dB HL at the same frequencies. The certainty of the evidence was very low. Conclusions: BC ASSR can be a reliable method for estimating BC thresholds. However, age and frequency significantly impact BC ASSR thresholds, highlighting the need to develop "correction" factors to accurately predict BC behavioural thresholds. Registration: PROSPERO CRD42023422150.
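A random-effects synthesis of per-study mean thresholds, of the kind described above, can be sketched with the DerSimonian-Laird estimator as below; the study means and standard errors are invented for illustration, not extracted from the included records.

# Sketch: DerSimonian-Laird random-effects pooling of per-study means.
import numpy as np

y = np.array([15.0, 18.5, 16.0, 13.0, 19.0])    # per-study mean difference (dB), hypothetical
se = np.array([2.0, 2.5, 1.8, 3.0, 2.2])        # per-study standard errors, hypothetical
v = se ** 2

w = 1.0 / v                                      # fixed-effect weights
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)               # heterogeneity statistic
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / C)          # between-study variance

w_re = 1.0 / (v + tau2)                          # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
se_pooled = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled mean: {pooled:.1f} dB (95% CI {pooled - 1.96*se_pooled:.1f} "
      f"to {pooled + 1.96*se_pooled:.1f}), tau^2 = {tau2:.2f}")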

13
Chronic acoustic degradation via cochlear implants alters predictive processing of audiovisual speech

Gastaldon, S.; Gheller, F.; Bonfiglio, N.; Brotto, D.; Bottari, D.; Trevisi, P.; Martini, A.; Vespignani, F.; Peressotti, F.

2026-01-27 neuroscience 10.64898/2026.01.25.701504 medRxiv
Top 0.1%
7.1%

This study provides the first neurophysiological evidence of how cochlear implant (CI) input affects predictive processing during audiovisual language comprehension in deaf individuals. Using EEG, we compared 18 CI users with 18 normal-hearing (NH) controls during sentence comprehension in which final word predictability was determined by the high or low semantic constraint (HC vs. LC) of the preceding sentence frame. Between the sentence frame and the final word, an 800 ms silent gap was introduced. Mouth visibility was manipulated during sentence frames (visible or digitally occluded; V+ vs. V-), while the final words were always presented with the mouth visible. In NH participants, lower-beta power (12-15 Hz) in left frontal and central sensors decreased for HC vs. LC contexts during the pre-target silent gap, but only when the mouth was visible, suggesting active prediction generation. In CI users, this lower-beta power decrease was absent. After final word presentation, both groups showed N400 predictability effects, indicating preserved prediction evaluation. However, CI users exhibited extended N400 effects in the V+ condition, suggesting additional processing demands. Across all participants, pre-target beta modulations correlated with language production abilities, supporting prediction-by-production frameworks. Within CI users, poorer audiometric thresholds correlated with larger N400 constraint effects, possibly indicating greater reliance on contextual prediction to compensate for degraded sensory input. These findings demonstrate that CI-mediated perception alters the neural mechanisms of prediction generation. The link between production skills and predictive mechanisms suggests that strengthening expressive language abilities may enhance predictive processing in CI users.

14
Multi-site MRI analysis of morphometric differences in brain regions in the presence of hearing loss and tinnitus across the adult lifespan

Abraham, I.; Ajmera, S.; Zhang, W.; Leaver, A. M.; Sutton, B. P.; Peelle, J. E.; Husain, F. T.

2026-03-10 neuroscience 10.64898/2026.03.06.710136 medRxiv
Top 0.1%
6.9%

The impact of age and hearing loss on the brain has garnered significant attention, as both factors have been implicated in the development of cognitive impairment or dementia. In this study, we investigated the impact of hearing loss and tinnitus on gray matter in the brain, while accounting for age. We conducted a comprehensive secondary analysis of structural MRI data obtained from multiple research sites (256 unique individuals) using voxel-based and surface-based morphometry. After harmonization of this multi-site brain data, our analysis replicated the previously reported finding of age-related decline in total cortical volume, but there was no significant effect of either hearing loss or tinnitus on total cortical volume. When a region-of-interest analysis was conducted, the hippocampus emerged as the only brain region that showed a direct impact of hearing loss after accounting for variance associated with age. This effect on hippocampal volume was evident in our sample from age 52 years onwards; when adjusted for hearing loss, the decline began at age 56 years. For the presence of tinnitus, the ventral posterior cingulate gyrus showed main effects with respect to cortical volume and surface area, while the medial occipito-temporal gyrus and the operculum of the inferior frontal gyrus showed significant main effects only for surface area. Post-hoc analysis revealed that the posterior cingulate gyrus showed significantly higher volume and larger surface area in individuals with tinnitus compared to those without tinnitus. Similarly, medial occipito-temporal gyrus surface area was increased, whereas the surface area of the inferior frontal opercular gyrus was reduced, in those with tinnitus compared to those without. Notably, while past studies have reported that the presence of tinnitus appeared to moderate some of these effects in certain participant groups, our results suggest a more complex relationship between sensory degradation, chronic tinnitus, and brain structure in individuals across the adult lifespan.

Highlights:
- Hearing loss and tinnitus can exacerbate regional brain atrophy across the adult lifespan.
- High-frequency hearing loss affects auditory cortex gray matter volume to a larger degree in older age.
- Hearing loss may accelerate decline in hippocampal volume by about 4 years.
- Chronic subjective tinnitus is associated with larger cingulate cortex volume, increased surface area in the cingulate cortex and the lingual gyrus, and decreased surface area of the frontal operculum compared to controls.
- Tinnitus-related effects on regional brain atrophy are not modified by the degree of hearing deficits.

15
Deficits in tail-lift and air-righting reflexes in rats after ototoxicity associate with loss of vestibular type I hair cells

Palou, A.; Tagliabue, M.; Beraneck, M.; Llorens, J.

2026-03-26 neuroscience 10.64898/2026.03.24.712950 medRxiv
Top 0.1%
6.6%

The rat vestibular system plays a critical role in anti-gravity responses such as the tail-lift reflex and the air-righting reflex. In a previous study in male rats, we obtained evidence that these two reflexes depend on the function of non-identical populations of vestibular sensory hair cells (HCs). Here, we caused graded lesions in the vestibular system of female rats by exposing the animals to several different doses of an ototoxic chemical, 3,3'-iminodipropionitrile (IDPN). After exposure, we assessed the anti-gravity responses of the rats and then assessed the loss of type I HCs (HCI) and type II HCs (HCII) in the central and peripheral regions of the crista, utricle and saccule. As expected, we recorded a dose-dependent loss of vestibular function and of HCs. The relationship between hair cell loss and functional loss was examined using non-linear models fitted by orthogonal distance regression. The results indicated that both the tail-lift and air-righting reflexes depend mostly on HCI function. However, the two reflexes differed in the epithelium on which they depend: while the tail-lift response is sensitive to loss of crista and/or utricular HCIs, the air-righting response depends rather on utricular and/or saccular integrity.
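Orthogonal distance regression, which the abstract uses to relate hair cell counts to reflex performance, accounts for measurement error on both axes rather than only in the dependent variable. The sketch below fits an assumed logistic relationship with scipy.odr; the model form, error sizes, and data are illustrative, not the study's.

# Sketch: non-linear fit by orthogonal distance regression (errors on both axes).
import numpy as np
from scipy import odr

def logistic(beta, x):
    top, x50, slope = beta
    return top / (1.0 + np.exp(-(x - x50) / slope))

# Hypothetical data: surviving type I hair cells (%) vs. reflex score
x = np.array([5, 20, 35, 50, 65, 80, 95], dtype=float)
y = np.array([0.1, 0.2, 0.5, 1.2, 2.6, 3.4, 3.8])

data = odr.RealData(x, y, sx=5.0, sy=0.3)        # measurement error on both axes
fit = odr.ODR(data, odr.Model(logistic), beta0=[4.0, 50.0, 10.0]).run()
print("parameters:", fit.beta, "+/-", fit.sd_beta)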

16
Multivariate Prediction of Conductive Dysfunction in Well and NICU Newborns using Wideband Acoustic Immittance with Acoustic Reflex Tests

Hunter, L. L.; Feeney, M. P.; Fitzpatrick, D.; Keefe, D. H.

2026-03-15 otolaryngology 10.64898/2026.03.13.26348314 medRxiv
Top 0.1%
6.5%

Objectives: The overall goal of this study was to assess tympanometric and ambient wideband acoustic immittance (WAI) tests and wideband acoustic reflex thresholds (ART) in well-baby and newborn intensive care (NICU) cohorts, with three specific objectives: 1) assess the predictive accuracy of wideband tympanometry (WBT) and ART for conductive dysfunction in ears referring on the first or second stages of newborn hearing screening; 2) identify inadequate tests likely due to probe blockages or leaks; and 3) assess prediction models separately for well-baby and NICU screening outcomes. Design: Prospective, observational study of full-term (n=514) and premature newborns (n=239) recruited from the well-baby and NICU nurseries of a birth hospital newborn hearing screening program. Wideband tympanometry, ambient absorbance, and acoustic reflexes were tested after Stage 1 transient otoacoustic emissions (TEOAE) screening. The reference standard for Pass or Refer groups was initially defined on the Stage 1 TEOAE test result. Pass or Refer groups were then reassigned based on the Stage 2 screening ABR for those who referred at Stage 1, and for all NICU infants. Multivariate models were developed using reflectance and admittance variables to predict conductive dysfunction relative to the screening reference standard in a randomized sub-group of subjects at Stage 1 and Stage 2 screening. Classification accuracy was evaluated on a second, independent sub-group. Individual tests were classified as having inadequate probe fits if they had excessively low values of sound pressure level or susceptance (leak) or absorbance (blockage). Results: The greatest differences and effect sizes in ambient absorbance between Pass and Refer screening groups occurred in frequency bins between 1.4-2 kHz. Screening failure at both Stage 1 and 2 was most accurately predicted by models using ambient absorbance and power level variables at frequencies between 1-2.8 kHz, including ARTs. Tympanometric admittance variables at the positive-pressure tail for frequencies between 1-2.8 kHz, in combination with the ART, were more accurate predictors than those at peak pressure or the negative-pressure tail. Multivariate models generalized well to an independent group of infants at both Stage 1 and 2 for both the ambient and tympanometric models. Ambient tests revealed more inadequate tests than tympanometric tests, primarily due to blocked probe tips. Excluding ears with probe leaks or blockages slightly improved the ambient prediction models, but did not affect the tympanometric models. Conclusion: Wideband acoustic reflex tests improved all models for ambient and tympanometric absorbance. Multivariate prediction models developed for WAI tests were repeatable in an independent group of well and NICU infants, suggesting that the results are generalizable to these populations. Detection of probe blockage or leaks slightly improved prediction for ambient measures. Pressurized tests have the advantage of requiring a hermetic seal, and are thus useful for ensuring adequate probe insertion.

17
Speech-in-Noise Difficulties in Aminoglycoside Ototoxicity Reflects Combined Afferent and Efferent Dysfunction

Motlagh Zadeh, L.; Izhiman, D.; Blankenship, C. M.; Moore, D. R.; Martin, D. K.; Garinis, A.; Feeney, P.; Hunter, L. R.

2026-03-26 otolaryngology 10.64898/2026.03.23.26348719 medRxiv
Top 0.1%
6.5%

Objectives: Patients with cystic fibrosis (CF) often receive aminoglycosides (AGs) to manage recurrent pulmonary infections, placing them at risk for ototoxicity. Chronic AG use can lead to complex cochlear damage affecting inner and outer hair cells, the stria vascularis, and spiral ganglion neurons. The greatest damage is typically in the basal cochlear region, which encodes high-frequency hearing, with additional involvement of more apical regions. While extended-high-frequency (EHF) hearing loss (EHFHL; 9-16 kHz) is often the earliest sign of AG ototoxicity, speech-in-noise (SiN) effects are rarely studied. Our overall hypothesis is that SiN perception difficulties in individuals with CF treated with AGs are related to combined cochlear and neural damage, primarily in the EHF range but also in the standard frequency (SF; 0.25-8 kHz) range. Three mechanisms that contribute to SiN perception were evaluated in children and young adults: 1) a primary effect of reduced EHF sensitivity, measured by pure-tone audiometry (PTA) and transient-evoked otoacoustic emissions (TEOAEs); 2) a secondary effect of subclinical damage in the SF range, measured by PTA and TEOAEs; and 3) additional neural effects, measured by middle ear muscle reflex (MEMR) thresholds (afferent) and growth functions (efferent). Design: A total of 185 participants were enrolled: 101 individuals with CF treated with intravenous AGs and 84 age- and sex-matched controls without hearing concerns or CF. Assessments included EHF and SF PTA; the Bamford-Kowal-Bench (BKB)-SIN test for SiN perception; double-evoked TEOAEs with chirp stimuli from 0.71 to 14.7 kHz; and ipsilateral and contralateral wideband MEMR thresholds and growth functions using broadband stimuli. Results: Reduced sensitivity at EHFs (PTA, TEOAEs) was not associated with impaired SiN perception in the CF group. SF hearing, regardless of EHF status, was the primary predictor of SiN performance in the CF group. Increased MEMR growth was also significantly associated with poorer SiN performance in the CF group. Conclusions: In CF, impaired SiN perception was primarily predicted by SF hearing impairment, with additional involvement of the efferent auditory pathway through increased MEMR growth. These results build on prior evidence for efferent neural effects of ototoxic exposures, supporting both sensory (afferent) and neural (efferent) mechanisms that contribute to listening difficulties in CF. Thus, preventive and intervention strategies should consider these combined mechanisms in people with AG ototoxicity to address their SiN problems.

18
Orca vowels and consonants: convergent spectral structures across cetacean and human speech

Begus, G.; Holt, M.; Wright, B.; Gruber, D. F.

2026-03-02 animal behavior and cognition 10.64898/2026.02.27.708287 medRxiv
Top 0.1%
6.4%

The vocal communication system of orcas (Orcinus orca) has so far been analyzed primarily in terms of fundamental frequency (F0) modulations, i.e. the vibration rate of their phonic lips. The calls have been divided into clicks, pulsed calls, whistles and types thereof. By analyzing 61 hours of on-orca acoustic recordings and controlling for the effect of high-frequency components (HFC) and F0, we report structured formant patterns in orca vocalizations, including diphthongal trajectories. Broadband spectrogram analysis reveals previously unreported formant patterns that appear independent of F0 and HFC and are hypothesized to result from air sac resonances. This study builds on the recent report of formant structure in vowel- and diphthong-like calls in another cetacean, sperm whales (Physeter macrocephalus). Using linguistic techniques, we further demonstrate that some calls are reminiscent of human consonant-vowel sequences, featuring bursts or abrupt decreases in amplitude. We also show that individual sparsely distributed clicks gradually transition into high-frequency tonal calls, which aligns with analyses of sperm whale codas as vocalic pulses. The paper makes methodological contributions to cetacean communication research by analyzing orca vocalizations with both narrowband and broadband spectrograms. The reported patterns are hypothesized to be actively controlled by the whales and may carry communicative information. The spectral patterns shown in this study provide an added dimension to the orca communication system that merits further analysis, and demonstrate convergent evolution of similar phonological features in cetacean (orca and sperm whale) and human communication systems.
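The narrowband/broadband distinction that the paper leans on methodologically comes down to the spectrogram window length: long windows resolve F0 and its harmonics, while short windows trade frequency resolution for time resolution and expose formant-like spectral envelopes. The sketch below illustrates this on a synthetic harmonic call, not an orca recording, and the parameter values are assumptions.

# Sketch: narrowband vs. broadband spectrograms of a synthetic harmonic call.
import numpy as np
from scipy.signal import spectrogram

fs = 48000
t = np.arange(0, 1.0, 1 / fs)
f0 = 800.0                                        # hypothetical fundamental (phonic-lip rate)
harmonics = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, 20))
call = harmonics * (1 + 0.3 * np.sin(2 * np.pi * 3 * t))   # slow amplitude modulation

# Narrowband: ~40 ms window (fine frequency resolution, resolves harmonics)
f_nb, t_nb, S_nb = spectrogram(call, fs=fs, nperseg=2048, noverlap=1536)
# Broadband: ~3 ms window (fine time resolution, shows formant-like envelope)
f_bb, t_bb, S_bb = spectrogram(call, fs=fs, nperseg=128, noverlap=96)

print("narrowband bin width:", fs / 2048, "Hz; broadband bin width:", fs / 128, "Hz")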

19
Isoflurane preferentially modulates synaptic responses to corticocortical stimulation over thalamocortical stimulation

Wright, S.; Banks, M. I.; Raz, A.

2026-02-11 neuroscience 10.64898/2026.02.09.704944 medRxiv
Top 0.1%
6.4%

Objective: To test the effect of isoflurane on synaptic transmission of cortico-cortical and thalamocortical projections to the auditory cortex, and to investigate how it modulates cortical sensory information processing to produce unconsciousness. Methods: Using murine auditory thalamocortical brain slices, afferent pathways from the medial geniculate body (MGB) and layer 1 of the proximal cortex were stimulated to evoke excitatory postsynaptic potentials (eEPSPs) in cortical neurons. Whole-cell recordings were made from pyramidal and fast-spiking neurons in layer 2/3 and layer 5. eEPSPs and intrinsic membrane properties were evaluated in response to stimulation of both pathways with and without isoflurane. Results: Isoflurane administration resulted in significant eEPSP amplitude reduction following stimulation of both thalamic and cortical pathways in layer 2/3 (p=0.015, p<0.001) and layer 5 (p<0.001, p<0.001) pyramidal neurons, while it significantly reduced eEPSP amplitude in fast-spiking interneurons only with cortical stimulation (p<0.001). Overall, isoflurane preferentially suppressed synaptic responses to cortico-cortical stimulation compared to thalamocortical stimulation (p=0.0002). Under isoflurane, cortico-cortical stimulation, compared to thalamocortical stimulation, evoked eEPSPs with reduced 10-90% rise time in both layer 2/3 and layer 5 pyramidal neurons, and shorter latency in layer 5 neurons. The paired-pulse ratio was not changed by isoflurane application, although a trend toward loss of depression appeared in layer 5 pyramidal neurons stimulated by cortical activation. Additional intrinsic neuronal measurements revealed that isoflurane significantly reduced spike threshold in both layer 2/3 and layer 5 neurons, reduced spike latency in layer 2/3 neurons, and reduced input resistance in layer 5 neurons. However, these intrinsic neuronal changes were not seen in fast-spiking interneurons. All isoflurane-induced changes were reversible during washout. Conclusions: Application of 1% isoflurane to brain slices significantly reduced the amplitudes of eEPSPs and modulated intrinsic neuronal properties. The effects on eEPSP amplitude were greater for cortical stimulation than for thalamic stimulation. Isoflurane modulated intrinsic neuronal firing properties in pyramidal neurons, but not in fast-spiking interneurons.

20
From Variability to Synchrony: Non-linear Development of Auditory Neural Responses During the First Year of Life

Reisenberger, E.; Schabus, M.; Florea, C.; Angerer, M.; Reimann-Ayiköz, M.; Preiss, J.; Roehm, D.; Heib, D. P. J.; Fazelnia, C.; Ameen, M. S.

2026-03-04 developmental biology 10.64898/2026.02.20.706969 medRxiv
Top 0.1%
6.2%

In humans, the first year of life is characterized by rapid developmental changes, including substantial brain maturation. As a result, neural responses to auditory stimuli undergo marked changes during this period. In this study, we followed 69 infants across their first year of life and recorded high-density electroencephalography (hdEEG) at 2 weeks, 6 months, and 12 months postpartum. Infants were presented with pure beep tones to examine the development of neural responses to auditory stimulation. We analysed event-related potentials (ERPs), inter-trial phase coherence (ITPC), and time-frequency (TF) responses to the beep tones and controlled for arousal state during stimulus presentation. We found that with increasing age, neural responses became more pronounced and showed reduced trial-to-trial variability. Phase synchronization increased from 2 weeks to later developmental stages in a broad low-frequency range (0 to 11 Hz), indicating improved temporal alignment of brain responses over time. However, phase synchronization decreased from 6 to 12 months, suggesting a developmental transition towards more differentiated brain activity. Taken together, these findings demonstrate that auditory maturation during the first year of life follows a non-linear trajectory driven by dynamic changes in neural synchronization, reflecting the progressive refinement of functional neural circuits. Our results thus provide a critical benchmark for understanding the neural dynamics underlying sensory development during this period. Impact Statement: Longitudinal high-density EEG recordings reveal that neural responses to auditory stimuli undergo non-linear developmental changes during the first year of life, driven by dynamic shifts in neural synchronization that reflect progressive refinement of auditory neural processing.